
    Software Techniques to Mitigate Errors on Noisy Quantum Computers

    Quantum computers are domain-specific accelerators that can provide large speedups for important problems. Machines with a few tens of qubits have already been demonstrated, and machines with 100+ qubits are expected soon. These machines face significant reliability and scalability challenges: high hardware error rates limit the computations they can run faithfully, so mitigating hardware errors is essential to achieving quantum speedup. Our first work exploits the variability in the error rates of qubits, steering more operations toward qubits with lower error rates and avoiding error-prone qubits. Our second work executes multiple versions of a program, each tuned to fail in different ways, so that the machine is less vulnerable to correlated errors and the correct answer is easier to infer. Our third work exploits the state-dependent bias in measurement errors (state 1 is more error-prone than state 0) by dynamically flipping the qubit state so that the stronger state is the one measured. We evaluate on real quantum machines from IBM and demonstrate significant improvements in overall system reliability.
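
    As a rough illustration of the invert-and-measure idea behind the third work, the Python sketch below (using Qiskit; the helper names and the unconditional flip are illustrative simplifications, not the thesis's exact method) flips every qubit before measurement so a logical 1 is read out as the less error-prone 0, then un-flips the returned bitstrings in classical post-processing:

    ```python
    from qiskit import QuantumCircuit

    def invert_and_measure(circ: QuantumCircuit) -> QuantumCircuit:
        # Copy the caller's circuit, flip every qubit with an X gate, then
        # measure: a logical |1> is now physically read out as the less
        # error-prone |0> state.
        flipped = circ.copy()
        flipped.barrier()
        for q in range(flipped.num_qubits):
            flipped.x(q)
        flipped.measure_all()
        return flipped

    def unflip_counts(counts: dict) -> dict:
        # Undo the flip in classical post-processing (assumes a single
        # classical register, so keys are plain bitstrings).
        flip = lambda bits: "".join("0" if b == "1" else "1" for b in bits)
        return {flip(key): n for key, n in counts.items()}
    ```

    In the actual work the flip is applied dynamically, conditioned on the expected qubit state; the unconditional flip above is only to make the bias-inversion idea concrete.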

    The Dirty Secret of SSDs: Embodied Carbon

    Scalable Solid-State Drives (SSDs) have revolutionized the way we store and access our data across datacenters and handheld devices. Unfortunately, scaling this technology has a significant environmental impact: across the globe, most semiconductor manufacturing uses electricity generated from coal and natural gas. For instance, manufacturing one gigabyte of Flash emits 0.16 kg of CO2, a significant fraction of the total carbon emitted by the system. We estimate that manufacturing storage devices resulted in 20 million metric tonnes of CO2 emissions in 2021 alone. To better understand this concern, this paper compares the sustainability trade-offs between Hard Disk Drives (HDDs) and SSDs and recommends methodologies for estimating the embodied carbon costs of a storage system. We outline four possible strategies to make storage systems sustainable. First, this paper recommends directions that help select the right storage medium (SSD vs. HDD). Second, it proposes lifetime-extension techniques for SSDs. Third, it advocates effective and efficient recycling and reuse of high-density multi-level-cell SSDs. Fourth, specifically for handheld devices, it recommends leveraging elasticity in cloud storage.
    Comment: In the proceedings of the 1st Workshop on Sustainable Computer Systems Design and Implementation (HotCarbon 2022).
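
    As a back-of-the-envelope illustration of the paper's 0.16 kg CO2 per gigabyte figure, the Python sketch below estimates embodied manufacturing emissions for a few drive capacities (the capacities are illustrative, not from the paper):

    ```python
    # The paper's figure: manufacturing one gigabyte of Flash emits 0.16 kg CO2.
    KG_CO2_PER_GB_FLASH = 0.16

    def embodied_co2_kg(capacity_gb: float,
                        kg_per_gb: float = KG_CO2_PER_GB_FLASH) -> float:
        # Embodied (manufacturing-time) emissions scale linearly with capacity.
        return capacity_gb * kg_per_gb

    for capacity in (256, 512, 1024, 4096):
        print(f"{capacity:5d} GB SSD ~ {embodied_co2_kg(capacity):7.1f} kg CO2 embodied")
    ```

    By this estimate, a 1 TB Flash drive carries roughly 160 kg of CO2 before it stores a single byte, which is why the paper weighs embodied cost alongside operational cost when comparing SSDs and HDDs.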

    Synthesizing Quantum-Circuit Optimizers

    Near-term quantum computers are expected to operate in an environment where every operation is noisy, with no error correction, so quantum-circuit optimizers are applied to minimize the number of noisy operations. Physicists are constantly experimenting with novel devices and architectures, and for every new physical substrate and every modification of a quantum computer, major pieces of the optimizer must be modified or rewritten to run successful experiments. In this paper, we present QUESO, an efficient approach for automatically synthesizing a quantum-circuit optimizer for a given quantum device. For instance, in 1.2 minutes, QUESO can synthesize an optimizer with high-probability correctness guarantees for IBM computers that significantly outperforms leading compilers, such as IBM's Qiskit and TKET, on the majority (85%) of the circuits in a diverse benchmark suite. A number of theoretical and algorithmic insights underlie QUESO: (1) an algebraic approach for representing rewrite rules and their semantics, which facilitates reasoning about complex symbolic rewrite rules beyond the scope of existing techniques; (2) a fast approach for probabilistically verifying the equivalence of quantum circuits by reducing the problem to a special form of polynomial identity testing; (3) a novel probabilistic data structure, called a polynomial identity filter (PIF), for efficiently synthesizing rewrite rules; and (4) a beam-search-based algorithm that efficiently applies the synthesized symbolic rewrite rules to optimize quantum circuits.
    Comment: Full version of PLDI 2023 paper.
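
    For intuition on insight (2), the Python sketch below shows Schwartz-Zippel-style polynomial identity testing, the classical primitive that probabilistic equivalence checking reduces to; the reduction from quantum circuits to polynomials is QUESO's own and is not reproduced here, and the prime and trial count are illustrative assumptions:

    ```python
    import random

    P = (1 << 61) - 1  # a large prime field; the choice is illustrative

    def probably_equal(f, g, num_vars: int, trials: int = 30) -> bool:
        # Evaluate both polynomials at random points in Z_P. One
        # disagreement proves inequality; agreement on every trial means
        # the polynomials are identical with high probability.
        for _ in range(trials):
            point = [random.randrange(P) for _ in range(num_vars)]
            if f(*point) % P != g(*point) % P:
                return False  # definitely not identical
        return True  # identical with high probability

    # Usage: (x + y)^2 and x^2 + 2xy + y^2 agree at every sampled point.
    f = lambda x, y: (x + y) ** 2
    g = lambda x, y: x * x + 2 * x * y + y * y
    assert probably_equal(f, g, num_vars=2)
    ```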

    Scaling Qubit Readout with Hardware Efficient Machine Learning Architectures

    Reading a qubit is a fundamental operation in quantum computing: it translates quantum information into classical information, enabling subsequent classification to assign the qubit state '0' or '1'. Unfortunately, qubit readout is one of the most error-prone and slowest operations on a superconducting quantum processor; on state-of-the-art superconducting processors, readout errors range from 1% to 10%. High readout accuracy is essential both for near-term noisy quantum computers and for the error-corrected quantum computers of the future. Prior work used machine-learning-assisted single-shot qubit-state classification, in which a deep neural network compensates for crosstalk errors to achieve more robust discrimination. However, the size of the neural network can limit the scalability of the system, especially when fast hardware discrimination is required: this state-of-the-art baseline cannot be implemented on the off-the-shelf FPGAs that most systems use for the control and readout of superconducting qubits, which increases overall readout latency because discrimination must be performed in software. In this work, we propose HERQULES, a scalable approach that improves qubit-state discrimination by combining a hierarchy of matched filters with a significantly smaller, scalable neural network. We achieve substantially higher readout accuracy (a 16.4% relative improvement) than the baseline with a design that can be readily implemented on off-the-shelf FPGAs. We also show that HERQULES is more versatile than the baseline, supporting shorter readout durations without additional training overhead.
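
    A minimal NumPy sketch of a single matched-filter discriminator, the building block HERQULES arranges into a hierarchy (the trace shapes, kernel construction, and thresholding below are standard-practice assumptions, not the paper's exact design):

    ```python
    import numpy as np

    def build_matched_filter(traces0: np.ndarray, traces1: np.ndarray) -> np.ndarray:
        # traces0 / traces1: (shots, samples) complex IQ traces captured with
        # the qubit prepared in |0> / |1>. The filter kernel is the
        # conjugated difference of the mean traces.
        return np.conj(traces1.mean(axis=0) - traces0.mean(axis=0))

    def discriminate(trace: np.ndarray, kernel: np.ndarray, threshold: float) -> int:
        # Project one single-shot trace onto the kernel and threshold the
        # real part of the score: returns 1 if the shot looks more like |1>.
        score = np.real(np.sum(kernel * trace))
        return int(score > threshold)
    ```

    In practice the threshold would be calibrated, e.g. as the midpoint between the score distributions of the two prepared states; a filter this small maps naturally onto FPGA multiply-accumulate resources, which is the scalability point the paper makes.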